Lifted Probabilistic Inference


On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference

Broeck, Guy Van den

Neural Information Processing Systems

Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference, when done at the propositional level. To solve this problem, various lifted inference algorithms have been proposed that reason at the first-order level, about groups of objects as a whole. Despite the existence of various lifted inference approaches, there are currently no completeness results about these algorithms. The key contribution of this paper is that we introduce a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models. We then show how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables.
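The gap between propositional and lifted reasoning described in this abstract can be made concrete. The sketch below uses a toy theory and weights of my own choosing (not taken from the paper): it computes the weighted model count of ∀x. Smokes(x) ⇒ Cancer(x) over n domain objects, once by brute-force grounding and once in lifted fashion by exploiting the exchangeability of objects.

```python
from itertools import product

# Toy theory: forall x. Smokes(x) => Cancer(x), with per-atom weights
# (a true atom carries its weight; a false atom weighs 1.0).
w_smokes, w_cancer = 1.5, 2.0

def wfomc_ground(n):
    """Propositional WMC: enumerate all 2^(2n) truth assignments."""
    total = 0.0
    for assign in product([False, True], repeat=2 * n):
        smokes, cancer = assign[:n], assign[n:]
        if all((not s) or c for s, c in zip(smokes, cancer)):  # theory holds
            w = 1.0
            for s in smokes:
                w *= w_smokes if s else 1.0
            for c in cancer:
                w *= w_cancer if c else 1.0
            total += w
    return total

def wfomc_lifted(n):
    """Lifted WMC: objects are exchangeable, so the count factorizes
    per object. The satisfying (Smokes, Cancer) pairs for one object
    are (F,F), (F,T), (T,T)."""
    per_object = 1.0 + w_cancer + w_smokes * w_cancer
    return per_object ** n
```

The grounded version is exponential in the domain size, while the lifted version is a single closed-form expression; this asymmetry is exactly what a definition of "lifted inference" relative to a model class has to capture.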


Knowledge Compilation for Lifted Probabilistic Inference: Compiling to a Low-Level Language

Kazemi, Seyed Mehran (University of British Columbia) | Poole, David (University of British Columbia)

AAAI Conferences

Algorithms based on first-order knowledge compilation are currently the state of the art for lifted inference. These algorithms typically compile a probabilistic relational model into an intermediate data structure and use it to answer many inference queries. In this paper, we propose compiling a probabilistic relational model directly into a low-level target (e.g., C or C++) program instead of an intermediate data structure, taking advantage of advances in program compilation. Our experiments show orders-of-magnitude speedups compared to existing approaches.
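The compile-to-program idea can be illustrated in miniature. This sketch is purely hypothetical (it reuses the toy smokers theory and weights from above, and emits Python via `exec` rather than C, since the point is the one-time code-generation step, not the target language): instead of walking an intermediate circuit at query time, the model is turned into source code once, and the compiled function answers repeated queries directly.

```python
# Hypothetical compile-to-program sketch: generate source code for the
# lifted count of  forall x. Smokes(x) => Cancer(x)  and compile it once.
def compile_to_program(w_smokes, w_cancer):
    src = (
        "def query(n):\n"
        f"    per_object = 1.0 + {w_cancer} + {w_smokes} * {w_cancer}\n"
        "    return per_object ** n\n"
    )
    namespace = {}
    exec(src, namespace)       # one-time "compilation" of the model
    return namespace["query"]  # cheap to call for many domain sizes
```

A real implementation along the paper's lines would emit C/C++ and invoke the system compiler, but the division of labor is the same: pay the compilation cost once, then amortize it over many inference queries.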


Conditioning in First-Order Knowledge Compilation and Lifted Probabilistic Inference

Broeck, Guy Van den (KU Leuven) | Davis, Jesse (KU Leuven)

AAAI Conferences

Knowledge compilation is a powerful technique for compactly representing and efficiently reasoning about logical knowledge bases. It has been successfully applied to numerous problems in artificial intelligence, such as probabilistic inference and conformant planning. Conditioning, which updates a knowledge base with observed truth values for some propositions, is one of the fundamental operations employed for reasoning. In the propositional setting, conditioning can be efficiently applied in all cases. Recently, compilation of first-order knowledge bases has been explored. The majority of this work has centered around using first-order d-DNNF circuits as the target compilation language. However, conditioning has not been studied in this setting. This paper explores how to condition a first-order d-DNNF circuit. We show that it is possible to efficiently condition these circuits on unary relations. However, we prove that conditioning on higher-arity relations is #P-hard. We study the implications of these findings for lifted inference in first-order probabilistic models. This leads to a better understanding of which types of queries lifted inference can address.
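Why unary conditioning stays easy can be seen on the toy smokers theory used earlier (my own illustrative example, not the paper's): observing the unary relation Smokes for some objects merely splits the domain into exchangeable groups, so the lifted, factorized count survives.

```python
# Conditioning  forall x. Smokes(x) => Cancer(x)  on unary evidence:
# the domain splits into three exchangeable groups of objects.
w_smokes, w_cancer = 1.5, 2.0

def wfomc_given_smokes(n, n_pos, n_neg):
    """Weighted model count with n_pos objects observed to smoke,
    n_neg observed not to smoke, and the rest unobserved."""
    f_pos = w_smokes * w_cancer                   # only (T,T) satisfies
    f_neg = 1.0 + w_cancer                        # (F,F) and (F,T) satisfy
    f_unk = 1.0 + w_cancer + w_smokes * w_cancer  # all three satisfying pairs
    return f_pos**n_pos * f_neg**n_neg * f_unk**(n - n_pos - n_neg)
```

Evidence on a higher-arity relation (say, Friends(x, y) for specific pairs) would break this symmetry between objects, which is the intuition behind the paper's #P-hardness result for conditioning on such relations.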


Leveraging Ontologies for Lifted Probabilistic Inference and Learning

Kiddon, Chloe Marielle (University of Washington) | Domingos, Pedro (University of Washington)

AAAI Conferences

Exploiting ontologies for efficient inference is one of the most widely studied topics in knowledge representation and reasoning. The use of ontologies for probabilistic inference, however, is much less developed. A number of algorithms for lifted inference in first-order probabilistic languages have been proposed, but their scalability is limited by the combinatorial explosion in the sets of objects that need to be considered. We propose a coarse-to-fine inference approach that leverages a class hierarchy to combat this problem. Starting at the highest level, our approach performs inference at successively finer grains, pruning low-probability atoms before refining. We provide bounds on the error incurred by this approach relative to full ground inference as a function of the pruning threshold. We also show how to learn parameters in a coarse-to-fine manner to maximize the opportunities for pruning during inference. Experiments on link prediction and biomolecular event prediction tasks show our method can greatly improve the scalability of lifted probabilistic inference.
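The coarse-to-fine control flow described above can be sketched in a few lines. The hierarchy, probabilities, and threshold below are invented for illustration: inference starts at the root class, prunes any subtree whose coarse probability falls below the threshold, and refines the survivors down to individual atoms.

```python
# Toy coarse-to-fine pruning over a class hierarchy (hypothetical data).
hierarchy = {"Entity": ["Person", "Place"],
             "Person": ["alice", "bob"],
             "Place": ["paris"]}
coarse_prob = {"Entity": 0.9, "Person": 0.8, "Place": 0.05,
               "alice": 0.7, "bob": 0.6, "paris": 0.4}

def refine(cls, threshold=0.1):
    """Return the surviving leaf atoms under cls, pruning whole
    subtrees whose coarse probability is below the threshold."""
    if coarse_prob[cls] < threshold:   # prune the entire subtree
        return []
    children = hierarchy.get(cls)
    if children is None:               # leaf atom survives
        return [cls]
    out = []
    for child in children:
        out.extend(refine(child, threshold))
    return out
```

Because "Place" is pruned at the class level, its members are never examined individually; the error bounds in the paper quantify what such pruning can cost relative to full ground inference.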